Low-resolution face alignment and recognition using mixed-resolution classifiers
A very common case for law enforcement is recognition of suspects from a long distance or in a crowd. This is an important application for low-resolution face recognition (in the authors' case, a face region below 40 × 40 pixels in size). Normally, high-resolution images of the suspects are used as references, which leads to a resolution mismatch between the target and reference images, since the target images are usually taken at a long distance and are of low resolution. Most existing methods, designed to match high-resolution images, cannot handle low-resolution probes well. In this study, the authors propose a novel method especially designed to compare low-resolution images with high-resolution ones, based on the log-likelihood ratio (LLR). In addition, they demonstrate the difference in recognition performance between real low-resolution images and images down-sampled from high-resolution ones. Misalignment is one of the most important issues in low-resolution face recognition. Two approaches, matching-score-based registration and extended training with images at various alignments, are introduced to handle the alignment problem. Experiments on real low-resolution face databases show that the proposed methods outperform the state of the art.
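The LLR-based comparison can be pictured as a likelihood ratio between "same identity" and "different identity" hypotheses on a feature difference. A minimal sketch, assuming isotropic Gaussian models with hypothetical variances; this is an illustration of the general LLR idea, not the authors' actual formulation:

```python
import numpy as np

def llr_score(probe, reference, sigma_within=1.0, sigma_between=2.0):
    """Log-likelihood ratio that probe and reference features share an identity,
    under a simple isotropic-Gaussian model (illustrative assumption only)."""
    d = probe - reference
    # log p(d | same identity) - log p(d | different identity)
    log_same = -0.5 * np.sum(d**2) / sigma_within**2 - d.size * np.log(sigma_within)
    log_diff = -0.5 * np.sum(d**2) / sigma_between**2 - d.size * np.log(sigma_between)
    return log_same - log_diff
```

A higher score favours the same-identity hypothesis; in practice the densities would be learned from matched low-/high-resolution feature pairs rather than fixed.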
Interpretable ODE-style Generative Diffusion Model via Force Field Construction
For a considerable time, researchers have focused on developing a method that
establishes a deep connection between the generative diffusion model and
mathematical physics. Despite previous efforts, progress has been limited to
the pursuit of a single specialized method. In order to advance the
interpretability of diffusion models and explore new research directions, it is
essential to establish a unified ODE-style generative diffusion model. Such a
model should draw inspiration from physical models and possess a clear
geometric meaning. This paper aims to identify various physical models that are
suitable for constructing ODE-style generative diffusion models accurately from
a mathematical perspective. We then summarize these models into a unified
method. Additionally, we perform a case study where we use the theoretical
model identified by our method to develop a range of new diffusion model
methods, and conduct experiments. Our experiments on CIFAR-10 demonstrate the
effectiveness of our approach. We have constructed a computational framework
that attains highly proficient results with regards to image generation speed,
alongside an additional model that demonstrates exceptional performance in both
Inception score and FID score. These results underscore the significance of our
method in advancing the field of diffusion models.
Comment: the authors note they found some mistakes in the results.
Low-resolution face recognition and the importance of proper alignment
Face recognition methods for low resolution are often developed and tested on down-sampled images instead of on real low-resolution images. Although there is a growing awareness that down-sampled and real low-resolution images are different, few efforts have been made to analyse the differences in recognition performance. Here, the authors explore the differences and demonstrate that alignment is a major cause, especially in the absence of pose and illumination variations. The authors found that the recognition performance on down-sampled images is inflated mostly because the images are perfectly aligned before down-sampling using high-resolution landmarks, while real low-resolution images have much poorer alignment. To obtain better alignment for real low-resolution images, the authors apply matching-score-based registration, which does not rely on accurate landmarks. The authors propose to divide low resolution into three ranges to harmonise the terminology: upper low resolution (ULR), moderately low resolution (MLR), and very low resolution (VLR). Most face recognition methods perform well on ULR. MLR is a challenge for commercial systems, but a low-resolution deep-learning method can handle it very well. The performance of most methods degrades significantly for VLR, except for simple holistic methods, which perform the best.
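Matching-score-based registration, as described, replaces landmark-based alignment with a search for the transformation that maximises the matching score itself. A minimal sketch over integer translations, using normalised cross-correlation as a stand-in score (the function name and score choice are illustrative assumptions):

```python
import numpy as np

def score_based_align(probe, gallery, max_shift=2):
    """Try small translations of the LR probe and keep the one that maximizes
    the matching score against the gallery template (illustrative sketch)."""
    best_score, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(probe, dy, axis=0), dx, axis=1)
            # normalized cross-correlation as a stand-in matching score
            score = np.sum(shifted * gallery) / (
                np.linalg.norm(shifted) * np.linalg.norm(gallery) + 1e-12)
            if score > best_score:
                best_score, best_shift = score, (dy, dx)
    return best_shift, best_score
```

A real implementation would search rotation and scale as well, and use the recogniser's own matching score rather than raw correlation.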
An evaluation of super-resolution for face recognition
We evaluate the performance of face recognition algorithms on images at various resolutions. Then we show to what extent super-resolution (SR) methods can improve the recognition performance when comparing low-resolution (LR) to high-resolution (HR) facial images. Our experiments use both synthetic data (from the FRGC v1.0 database) and surveillance images (from the SCface database). Three face recognition methods are used, namely Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Patterns (LBP). Two SR methods are evaluated. The first method learns the mapping between LR images and the corresponding HR images using a regression model. As a result, the reconstructed SR images are close to the HR images that belong to the same subject and far away from others. The second method compares LR and HR facial images without explicitly constructing SR images. It finds a coherent feature space where the correlation of LR and HR is maximal, and then computes the mapping from LR to HR in this feature space. The performance of the two SR methods is compared to that delivered by standard face recognition without SR. The results show that LDA is largely robust to resolution changes, while LBP is not suitable for the recognition of LR images. SR methods improve the recognition accuracy when down-sampled images are used, and the first method provides better results than the second one. However, the improvement for realistic LR surveillance images remains limited.
Optimizing microbial- and enzyme-induced carbonate precipitation treatment regimes to improve the performance of recycled aggregate concrete
Recycled aggregate concrete (RAC) typically suffers from inferior properties due to old mortar on the surface of recycled aggregate (RA), and the practical application of two proposed treatment methods, microbial-induced carbonate precipitation (MICP) and enzyme-induced carbonate precipitation (EICP), has encountered challenges in determining optimal culture medium and precipitation regimes. This study initially aimed to address these challenges by establishing the feasibility of using chloride-free cultivation medium to avoid introducing chloride ions that could damage the steel reinforcement. The optimal Ca concentration in the precipitation culture medium was determined as 0.3 mol/L for MICP and 0.5 mol/L for EICP. Furthermore, the optimal precipitation regimes for MICP and EICP treatments were identified as I-S (5 cycles) and M-S (3 cycles), respectively. The quantitative evaluation of the above factors enabled the direct practical application of these optimal treatment regimes. The performance of RAC was significantly improved after both MICP and EICP treatments compared to untreated RAC, with EICP treatment demonstrating superior performance. The precipitated CaCO3 formed during MICP treatment consisted mainly of spherical vaterite crystals, while the precipitation formed during EICP treatment comprised vaterite, calcite, and aragonite. These differences in phase and mechanism between MICP and EICP treatments could explain the variations in the performance of RAC.
Density Matters: Improved Core-set for Active Domain Adaptive Segmentation
Active domain adaptation has emerged as a solution to balance the expensive
annotation cost and the performance of trained models in semantic segmentation.
However, existing works usually ignore the correlation between selected samples
and their local context in feature space, which leads to inferior use of
annotation budgets. In this work, we revisit the theoretical bound of the
classical Core-set method and identify that the performance is closely related
to the local sample distribution around selected samples. To estimate the
density of local samples efficiently, we introduce a local proxy estimator with
Dynamic Masked Convolution and develop a Density-aware Greedy algorithm to
optimize the bound. Extensive experiments demonstrate the superiority of our
approach. Moreover, with very few labels, our scheme achieves comparable
performance to the fully supervised counterpart.
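The density-weighted Core-set idea can be illustrated with a greedy selection that scales the classical farthest-point gain by a kernel density estimate. This is a hypothetical sketch of the general principle only, not the paper's Dynamic Masked Convolution estimator:

```python
import numpy as np

def density_aware_greedy(features, budget, bandwidth=1.0):
    """Greedy core-set selection that weights the classical farthest-point
    rule by a local density estimate (illustrative sketch)."""
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    # kernel density proxy: how crowded each sample's neighbourhood is
    density = np.exp(-(dists / bandwidth) ** 2).sum(axis=1)
    selected = [int(np.argmax(density))]  # start from the densest point
    for _ in range(budget - 1):
        min_dist = dists[:, selected].min(axis=1)  # distance to nearest pick
        gain = min_dist * density  # prefer far-away points in dense regions
        gain[selected] = -np.inf
        selected.append(int(np.argmax(gain)))
    return selected
```

Weighting the coverage gain by density steers the budget toward well-populated regions of feature space instead of isolated outliers.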
Scaling the Ion Inertial Length and Its Implications for Modeling Reconnection in Global Simulations
We investigate the use of artificially increased ion and electron kinetic scales in global plasma simulations. We argue that as long as the global and ion inertial scales remain well separated, (1) the overall global solution is not strongly sensitive to the value of the ion inertial scale, while (2) the ion inertial scale dynamics will also be similar to the original system, but it occurs at a larger spatial scale, and (3) structures at intermediate scales, such as magnetic islands, grow in a self-similar manner. To investigate the validity and limitations of our scaling hypotheses, we carry out many simulations of a two-dimensional magnetosphere with the magnetohydrodynamics with embedded particle-in-cell (MHD-EPIC) model. The PIC model covers the dayside reconnection site. The simulation results confirm that the hypotheses are true as long as the increased ion inertial length remains less than about 5% of the magnetopause standoff distance. Since the theoretical arguments are general, we expect these results to carry over to three dimensions. The computational cost is reduced by the third and fourth powers of the scaling factor in two- and three-dimensional simulations, respectively, which can be many orders of magnitude. The present results suggest that global simulations that resolve kinetic scales for reconnection are feasible. This is a crucial step for applications to the magnetospheres of Earth, Saturn, and Jupiter and to the solar corona.
Key Points: the effects of artificially increased kinetic scales are studied with MHD-EPIC simulations; changing the kinetic scales does not change the global solution significantly; increasing the kinetic scales makes global simulations with embedded kinetic regions feasible.
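The quoted cost reduction follows from grid spacing and time step both relaxing with the kinetic scale: increasing the ion inertial length by a factor f cuts the cell count by f per spatial dimension and the number of time steps by another factor f. A one-line illustration of that bookkeeping (the function is ours, not from the paper):

```python
def cost_reduction(scale_factor, dims):
    """Speedup from increasing kinetic scales by `scale_factor`: the cell
    count shrinks by scale_factor**dims and the step count by scale_factor,
    giving scale_factor**(dims + 1) fewer cell-updates overall."""
    return scale_factor ** (dims + 1)
```

This reproduces the abstract's third and fourth powers for two- and three-dimensional simulations, e.g. a factor-10 scale increase in 3D yields a 10,000-fold reduction.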
Prototypical Contrast Adaptation for Domain Adaptive Semantic Segmentation
Unsupervised Domain Adaptation (UDA) aims to adapt the model trained on the
labeled source domain to an unlabeled target domain. In this paper, we present
Prototypical Contrast Adaptation (ProCA), a simple and efficient contrastive
learning method for unsupervised domain adaptive semantic segmentation.
Previous domain adaptation methods merely consider the alignment of the
intra-class representational distributions across various domains, while the
inter-class structural relationship is insufficiently explored, so the
aligned representations on the target domain may no longer be as easily
discriminated as on the source domain. Instead, ProCA incorporates
inter-class information into class-wise prototypes, and adopts the
class-centered distribution alignment for adaptation. By considering the same
class prototypes as positives and other class prototypes as negatives to
achieve class-centered distribution alignment, ProCA achieves state-of-the-art
performance on classical domain adaptation tasks, i.e., GTA5 → Cityscapes and
SYNTHIA → Cityscapes. Code is available at
https://github.com/jiangzhengkai/ProCA.
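The class-centered alignment can be sketched as an InfoNCE-style objective over class prototypes, treating the same-class prototype as the positive and all other prototypes as negatives. A minimal NumPy illustration; the function name and temperature are assumptions, not ProCA's exact loss:

```python
import numpy as np

def prototype_contrast_loss(feature, prototypes, label, temperature=0.1):
    """InfoNCE-style loss pulling a feature toward its class prototype and
    away from other class prototypes (sketch of class-centered alignment)."""
    # cosine similarities between the feature and every class prototype
    f = feature / np.linalg.norm(feature)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    logits = p @ f / temperature
    logits -= logits.max()  # numerical stability for the softmax
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[label])  # cross-entropy toward the true prototype
```

Minimising this over target-domain features drives each class cluster toward its prototype while keeping prototypes of different classes apart, which is the inter-class structure the abstract emphasises.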